Approximate Dynamic Programming Strategy for Dual Adaptive Control

Authors

  • Jong Min Lee
  • Jay H. Lee
Abstract

An approximate dynamic programming (ADP) strategy for a dual adaptive control problem is presented. The optimal control policy of a dual adaptive control problem can be derived by solving a stochastic dynamic programming problem, which is computationally intractable with conventional solution methods that require sampling the complete hyperstate space. To make the problem computationally amenable, we perform closed-loop simulations with different control policies to generate a data set that defines a subset of the hyperstate space within which the Bellman equation is iterated. A local approximator with a penalty function is designed to estimate cost-to-go values over the continuous hyperstate space. An integrating process with an unknown gain is used for illustration. Copyright © 2005 IFAC
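The procedure the abstract describes can be illustrated with a minimal Python sketch. It is not the authors' implementation: it assumes a scalar integrating process x_{k+1} = x_k + b·u_k + w with unknown gain b, a recursive least-squares gain estimate (bhat, P) as the information part of the hyperstate, closed-loop simulations under a few heuristic policies to sample the hyperstate subspace, and a nearest-neighbour cost-to-go approximator with a distance penalty. All function names, grids, and constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Integrating process x_{k+1} = x_k + b*u_k + w with unknown gain b.
B_TRUE, W_STD = 0.8, 0.05

def bayes_update(bhat, P, x, x_next, u, w_var=W_STD**2):
    """Recursive least-squares update of the gain estimate from one transition."""
    if abs(u) < 1e-8:
        return bhat, P
    y = x_next - x                       # observed b*u + w
    S = u * P * u + w_var
    K = P * u / S
    bhat = bhat + K * (y - bhat * u)
    P = (1 - K * u) * P
    return bhat, P

# 1) Closed-loop simulations under a few heuristic policies sample the
#    hyperstate (x, bhat, P) subspace that is actually visited.
def simulate(policy_gain, steps=40):
    x, bhat, P = 1.0, 0.3, 1.0
    traj = []
    for _ in range(steps):
        u = -policy_gain * x / max(bhat, 0.1)   # certainty-equivalence feedback
        x_next = x + B_TRUE * u + rng.normal(0, W_STD)
        traj.append((x, bhat, P))
        bhat, P = bayes_update(bhat, P, x, x_next, u)
        x = x_next
    return traj

H = np.array([h for g in (0.2, 0.5, 1.0) for h in simulate(g)])  # sampled hyperstates
J = np.zeros(len(H))                                             # cost-to-go estimates

def J_approx(h, penalty=5.0):
    """Nearest-neighbour approximator with a distance penalty that discourages
    relying on cost-to-go values far from the sampled subset."""
    d = np.linalg.norm(H - h, axis=1)
    i = np.argmin(d)
    return J[i] + penalty * d[i]

# 2) Iterate the Bellman equation over the sampled hyperstates only.
U_GRID = np.linspace(-2.0, 2.0, 21)
GAMMA = 0.9
for _ in range(30):
    J_new = np.empty_like(J)
    for i, (x, bhat, P) in enumerate(H):
        best = np.inf
        for u in U_GRID:
            # crude 2-point expectation over the gain uncertainty
            cost = 0.0
            for b in (bhat - np.sqrt(P), bhat + np.sqrt(P)):
                x_next = x + b * u
                b2, P2 = bayes_update(bhat, P, x, x_next, u)
                cost += 0.5 * (x_next**2 + 0.1 * u**2
                               + GAMMA * J_approx(np.array([x_next, b2, P2])))
            best = min(best, cost)
        J_new[i] = best
    J = J_new

print(float(J[np.argmin(np.abs(H[:, 0]))]))  # cost-to-go near x ≈ 0
```

Note how exploration enters only through the data: inputs that excite the system shrink P in the sampled trajectories, so hyperstates with low gain uncertainty acquire lower cost-to-go, which is the dual-control trade-off the iteration captures.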


Related articles

Dual Heuristic Programming for Fuzzy Control

Overview material for the Special Session (Tuning Fuzzy Controllers Using Adaptive Critic Based Approximate Dynamic Programming) is provided. The Dual Heuristic Programming (DHP) method of Approximate Dynamic Programming is described and used to design a fuzzy control system. DHP and related techniques have been developed in the neurocontrol context but can be equally productive when used w...


A Multi-Stage Single-Machine Replacement Strategy Using Stochastic Dynamic Programming

In this paper, the single machine replacement problem is modeled within the frameworks of stochastic dynamic programming and control threshold policy, and some properties of the optimal values of the control thresholds are derived. Using these properties and by minimizing a cost function, the optimal values of two control thresholds for the time between productions of two successive nonco...


Stochastic Dynamic Programming with Markov Chains for Optimal Sustainable Control of the Forest Sector with Continuous Cover Forestry

We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry and the complete system is sustainable. Forest industry production, logistic solutions and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full system optimization is necessary for co...


Adaptive Critic Based Approximate Dynamic Programming for Tuning Fuzzy Controllers

This work was supported by the National Science Foundation under grant ECS-9904378. In this paper we show the applicability of the Dual Heuristic Programming (DHP) method of Approximate Dynamic Programming to parameter tuning of a fuzzy control system. DHP and related techniques have been developed in the neurocontrol context but can be equally productive when used with fuzzy controll...


An Introduction to Adaptive Critic Control: A Paradigm Based on Approximate Dynamic Programming

Adaptive critic control is an advanced control technology developed for nonlinear dynamical systems in recent years. It is based on the idea of approximate dynamic programming. Dynamic programming was introduced by Bellman in the 1950s for solving optimal control problems of nonlinear dynamical systems. Due to its high computational complexity, applications of dynamic programming have been lim...



Journal:

Volume   Issue 

Pages  -

Publication year: 2005